Results 1 - 9 of 9
1.
The Journal of Health Administration Education ; 39(2):303-310, 2023.
Article in English | ProQuest Central | ID: covidwho-2250604

ABSTRACT

The COVID-19 pandemic exacerbated feelings of isolation for students in George Mason University's (GMU) Health Informatics Program and challenged their interactions in many of the program's online classes. This study tested ways to improve the experiences of students and faculty with online courses. Twenty-two undergraduate students enrolled in an 8-week Electronic Health Record Configuration and Data Analysis (HAP 464) course participated in the study. Three changes were made to the structure of the course. First, the course was taught as a "flipped" online course; second, students were required to collaborate on assignments (but not exams); third, the instructor trained peer-instructors one on one, and these peer-instructors then taught the rest of the students in small group sessions. The results indicated that interaction among the students increased: 75% of students were talking between 80% and 100% of class time. The classes were intensely interactive, unlike traditional class lectures. All 22 students received a grade of B or better, indicating that intense online interaction did not reduce the rigor of the training.

2.
PLOS Global Public Health ; 2(7), 2022.
Article in English | EuropePMC | ID: covidwho-2248894

ABSTRACT

This study uses two existing data sources to examine how patients' symptoms can be used to differentiate COVID-19 from other respiratory diseases. One dataset consisted of 839,288 laboratory-confirmed, symptomatic, COVID-19 positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2020, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza-positive cases and 812 cases with symptomatic influenza-like illness. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. The comparisons were done using 45 scenarios, with each scenario making different assumptions regarding the prevalence of COVID-19 (2%, 4%, and 6%), influenza (0.01%, 3%, 6%, 9%, 12%), and influenza-like illness (1%, 3.5%, and 7%). For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables (age, gender) and 10 symptoms (cough, fever, chills, diarrhea, nausea and vomiting, shortness of breath, runny nose, sore throat, myalgia, and headache). The 5-fold cross-validated area under the receiver operating characteristic curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on a variety of factors, including (1) the prevalence of pathogens that cause COVID-19, influenza, and influenza-like illness; (2) the age of the patient; and (3) the presence of other symptoms. The model that relied on a 5-way combination of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without relying on a computer-based decision aid. The study results encourage development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions.
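
As an illustration of the modeling approach described in the abstract above, the following is a minimal sketch (not the authors' code) of a logistic regression that predicts COVID-19 status from age, gender, and 10 binary symptom indicators and reports a 5-fold cross-validated AROC. The data are simulated solely so the example runs.

```python
# Minimal sketch: logistic regression on demographics + 10 symptoms,
# scored with 5-fold cross-validated ROC AUC. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
symptoms = ["cough", "fever", "chills", "diarrhea", "nausea_vomiting",
            "shortness_of_breath", "runny_nose", "sore_throat", "myalgia", "headache"]

# Simulated design matrix: age, gender, and 10 symptom indicators.
age = rng.integers(18, 90, n)
gender = rng.integers(0, 2, n)
X_sym = rng.integers(0, 2, (n, len(symptoms)))
X = np.column_stack([age, gender, X_sym])

# Simulated outcome loosely tied to a few symptoms so the example has signal.
logit = -2 + 1.2 * X_sym[:, 1] + 0.8 * X_sym[:, 0] - 0.9 * X_sym[:, 6]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AROC: {auc.mean():.3f}")
```
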

3.
Cureus ; 15(2): e35110, 2023 Feb.
Article in English | MEDLINE | ID: covidwho-2268288

ABSTRACT

Objective: To estimate the multiple direct and indirect effects of social, environmental, and economic factors on COVID-19 vaccination rates (series complete) in the 3109 continental counties of the United States (U.S.). Study design: The dependent variable was the county COVID-19 vaccination rate in the U.S. as of April 15, 2022. Independent variables were collected from reliable secondary data sources, including the Census and the CDC. Independent variables measured at two different time frames were utilized to predict vaccination rates. The number of vaccination sites in a given county was calculated using geographic information system (GIS) packages as of April 9, 2022. The Internet Archive (Wayback Machine) was used to look up data for historical dates. Methods: A chain of temporally constrained least absolute shrinkage and selection operator (LASSO) regressions was used to identify direct and indirect effects on vaccination rates. The first regression identified direct predictors of vaccination rates. Next, the direct predictors were set as response variables in subsequent regressions and regressed on variables that occurred before them. These regressions identified additional indirect predictors of vaccination. Finally, both direct and indirect variables were included in a network model. Results: Fifteen variables directly predicted vaccination rates and explained 43% of the variation in vaccination rates in April 2022. In addition, 11 variables indirectly affected vaccination rates, and their influence on vaccination was mediated by direct factors. For example, the child poverty rate mediated the effect of (a) median household income, (b) children in single-parent homes, and (c) income inequality. As another example, median household income mediated the effect of (a) the percentage of residents under the age of 18, (b) the percentage of residents who are Asian, (c) home ownership, and (d) traffic volume in the prior year. Our findings describe not only the direct but also the indirect effects of variables. Conclusions: A diverse set of demographics, social determinants, public health status, and provider characteristics predicted vaccination rates. Vaccination rates change systematically and are affected by the demographic composition and social determinants of illness within a county. One of the merits of our study is that it shows how the direct predictors of vaccination rates can be mediators of the effects of other variables.
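
The chained LASSO procedure described above can be sketched as follows. This is a minimal illustration under assumed, hypothetical variable names (child_poverty, median_income, income_inequality, and so on) and simulated county data, not the study's dataset or code: a first cross-validated LASSO selects direct predictors of vaccination rates, and each selected predictor is then regressed on temporally earlier variables to surface indirect (mediated) effects.

```python
# Minimal sketch of a chain of LASSO regressions for direct/indirect effects.
# Variable names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_counties = 500
earlier_vars = ["income_inequality", "pct_under_18", "home_ownership"]  # assumed, earlier time frame
later_vars = ["child_poverty", "median_income", "vaccination_sites"]    # assumed, later time frame

df = pd.DataFrame(rng.normal(size=(n_counties, len(earlier_vars))), columns=earlier_vars)
df["child_poverty"] = 0.6 * df["income_inequality"] + rng.normal(0, 0.5, n_counties)
df["median_income"] = -0.5 * df["income_inequality"] + rng.normal(0, 0.5, n_counties)
df["vaccination_sites"] = rng.normal(size=n_counties)
df["vaccination_rate"] = (0.5 - 0.1 * df["child_poverty"] + 0.05 * df["median_income"]
                          + rng.normal(0, 0.05, n_counties))

def lasso_select(X, y):
    """Fit cross-validated LASSO and return predictors with nonzero coefficients."""
    fit = LassoCV(cv=5).fit(X, y)
    return [c for c, b in zip(X.columns, fit.coef_) if abs(b) > 1e-8]

# Step 1: direct predictors of vaccination rates.
direct = lasso_select(df[later_vars + earlier_vars], df["vaccination_rate"])

# Step 2: regress each direct predictor on temporally earlier variables
# to surface indirect (mediated) effects.
indirect = {d: lasso_select(df[earlier_vars], df[d]) for d in direct if d in later_vars}
print("direct:", direct, "| indirect paths:", indirect)
```
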

4.
Qual Manag Health Care ; 32(Suppl 1): S29-S34, 2023.
Article in English | MEDLINE | ID: covidwho-2246371

ABSTRACT

BACKGROUND AND OBJECTIVES: COVID-19 symptoms change after onset; some appear early, others later. This article examines whether the order in which symptoms occur can improve diagnosis of COVID-19 before test results are available. METHODS: In total, 483 individuals who had completed a COVID-19 test were recruited through listservs. Participants then completed an online survey regarding their symptoms and test results. The order of symptoms was set according to (a) whether the participant had a "history of the symptom" due to a prior condition and (b) whether the symptom "occurred first," or prior to, other symptoms of COVID-19. Two LASSO (Least Absolute Shrinkage and Selection Operator) regression models were developed. The first model, referred to as "time-invariant," used demographics and symptoms but not the order of symptom occurrence. The second model, referred to as "time-sensitive," used the same data set but included the order of symptom occurrence. RESULTS: The average cross-validated area under the receiver operating characteristic (AROC) curve for the time-invariant model was 0.784. The time-sensitive model had an AROC of 0.799. The difference between the 2 accuracy levels was statistically significant (α < .05). CONCLUSION: The order of symptom occurrence made a statistically significant, but small, improvement in the accuracy of the diagnosis of COVID-19.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , SARS-CoV-2 , ROC Curve
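
A minimal sketch of the comparison reported in the study above: a "time-invariant" L1-penalized (LASSO) logistic model built from symptom indicators alone versus a "time-sensitive" model that adds order-of-occurrence features, with both scored by cross-validated AROC. Data and effect sizes are simulated for illustration; this is not the authors' code.

```python
# Minimal sketch: time-invariant vs. time-sensitive L1-penalized logistic models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, k = 483, 10
symptoms = rng.integers(0, 2, (n, k))                    # symptom present / absent
occurred_first = symptoms * rng.integers(0, 2, (n, k))   # symptom appeared before the others

# Simulated outcome with some dependence on order of occurrence.
logit = -1 + 1.0 * symptoms[:, 0] + 0.8 * occurred_first[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
auc_invariant = cross_val_score(lasso_logit, symptoms, y, cv=5, scoring="roc_auc").mean()
auc_sensitive = cross_val_score(lasso_logit, np.hstack([symptoms, occurred_first]),
                                y, cv=5, scoring="roc_auc").mean()
print(f"time-invariant AROC={auc_invariant:.3f}, time-sensitive AROC={auc_sensitive:.3f}")
```
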
5.
Qual Manag Health Care ; 32(Suppl 1): S21-S28, 2023.
Article in English | MEDLINE | ID: covidwho-2246370

ABSTRACT

BACKGROUND AND OBJECTIVE: COVID-19 manifests with a broad range of symptoms. This study investigates whether clusters of respiratory, gastrointestinal, or neurological symptoms can be used to diagnose COVID-19. METHODS: We surveyed the symptoms of 483 subjects who had completed COVID-19 laboratory tests in the previous 30 days. The survey collected data on demographic characteristics, self-reported symptoms for different types of infections within 14 days of onset of illness, and self-reported COVID-19 test results. Robust LASSO regression was used to create 3 nested models. In all 3 models, the response variable was the COVID-19 test result. In the first model, referred to as the "main effect model," the independent variables were demographic characteristics, history of chronic symptoms, and current symptoms. The second model, referred to as the "hierarchical clustering model," added clusters of variables to the list of independent variables; these clusters were established through hierarchical clustering. The third model, referred to as the "interaction-terms model," also added clusters of variables to the list of independent variables; this time the clusters were established through pairwise and triple-way interaction terms. Models were constructed on a randomly selected 80% of the data, and accuracy was cross-validated on the remaining 20% of the data. The process was bootstrapped 30 times. The accuracy of the 3 models was measured using the average of the cross-validated areas under the receiver operating characteristic curves (AUROCs). RESULTS: In the 30 bootstrap samples, the main effect model had an AUROC of 0.78, the hierarchical clustering model had an AUROC of 0.80, and the interaction-terms model had an AUROC of 0.81. Both the hierarchical clustering model and the interaction-terms model differed significantly from the main effect model (α = .04). Patients of different races/ethnicities, genders, and ages presented with different symptom clusters. CONCLUSIONS: Using clusters of symptoms, it is possible to more accurately diagnose COVID-19 among symptomatic patients.


Subject(s)
COVID-19 , Humans , Male , Female , COVID-19/epidemiology , Triage , Syndrome , ROC Curve , Patients
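
The "hierarchical clustering model" idea from the study above can be sketched as follows: cluster the symptom variables by their correlation structure, add an "any symptom in this cluster" indicator for each cluster, and compare the cross-validated AUROC with and without the cluster features. The distance measure, the cut into 3 clusters, and the simulated data are assumptions made only for this illustration.

```python
# Minimal sketch: hierarchical clustering of symptom variables as extra features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, k = 483, 12
X = rng.integers(0, 2, (n, k)).astype(float)  # binary symptom indicators (simulated)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + X[:, 0] + 0.7 * X[:, 3]))))

# Cluster symptoms by correlation distance and cut the dendrogram into 3 groups.
corr_dist = 1 - np.abs(np.corrcoef(X, rowvar=False))
Z = linkage(corr_dist[np.triu_indices(k, 1)], method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

# Cluster features: "any symptom present" within each cluster.
cluster_feats = np.column_stack([X[:, labels == c].max(axis=1) for c in np.unique(labels)])

model = LogisticRegression(max_iter=1000)
auc_main = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
auc_clust = cross_val_score(model, np.hstack([X, cluster_feats]), y, cv=5, scoring="roc_auc").mean()
print(f"main-effects AUROC={auc_main:.3f}, with clusters AUROC={auc_clust:.3f}")
```
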
6.
Qual Manag Health Care ; 32(Suppl 1): S11-S20, 2023.
Article in English | MEDLINE | ID: covidwho-2238511

ABSTRACT

BACKGROUND AND OBJECTIVE: At-home rapid antigen tests provide a convenient and expedited resource for learning one's severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. However, the low sensitivity of at-home antigen tests presents a challenge. This study examines the accuracy of at-home tests when combined with computer-facilitated symptom screening. METHODS: The study used primary data collected in 2 phases: phase 1, during the period in which the alpha variant of SARS-CoV-2 was predominant in the United States, and phase 2, during the surge of the delta variant. Four hundred sixty-one study participants were included in the analyses from phase 1 and 374 from phase 2. Phase 1 data were used to develop a computerized symptom screening tool, using ordinary logistic regression with interaction terms, which predicted coronavirus disease-2019 (COVID-19) reverse transcription polymerase chain reaction (RT-PCR) test results. Phase 2 data were used to validate the accuracy of predicting COVID-19 diagnosis with (1) computerized symptom screening; (2) at-home rapid antigen testing; (3) the combination of both screening methods; and (4) the combination of symptom screening and vaccination status. The McFadden pseudo-R2 was used as a measure of the percentage of variation in RT-PCR test results explained by the various screening methods. RESULTS: The McFadden pseudo-R2 for the first at-home test, the second at-home test, and computerized symptom screening was 0.274, 0.140, and 0.158, respectively. Scores between 0.2 and 0.4 indicate moderate levels of accuracy. The first at-home test had low sensitivity (0.587) and high specificity (0.989). Adding a second at-home test did not improve the sensitivity of the first test. Computerized symptom screening improved the accuracy of the first at-home test, adding 0.131 points to its sensitivity and 6.9% to its pseudo-R2. The combination of computerized symptom screening and vaccination status was the most accurate method of screening patients for COVID-19 or an active infection with SARS-CoV-2 in the community (pseudo-R2 = 0.476). CONCLUSION: Computerized symptom screening could either improve or, in some situations, replace at-home antigen tests for individuals experiencing COVID-19 symptoms.


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , COVID-19/epidemiology , SARS-CoV-2 , COVID-19 Testing , Sensitivity and Specificity
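
A minimal sketch of how a McFadden pseudo-R2 can be computed for screeners like those compared in the study above (antigen result alone, a symptom-screening score, and their combination) against an RT-PCR reference. The predictors and data below are simulated placeholders, not the study's instruments or results.

```python
# Minimal sketch: McFadden pseudo-R2 for competing COVID-19 screeners.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 374
antigen = rng.integers(0, 2, n)            # at-home antigen result (assumed 0/1 coding)
symptom_score = rng.normal(size=n)          # computerized symptom-screening score (assumed)
pcr = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 2.5 * antigen + 0.8 * symptom_score))))

def mcfadden_r2(X):
    """McFadden pseudo-R2 = 1 - LL(model) / LL(intercept-only)."""
    res = sm.Logit(pcr, sm.add_constant(X)).fit(disp=0)
    return res.prsquared  # statsmodels reports McFadden's measure directly

print("antigen only:     ", round(mcfadden_r2(antigen.reshape(-1, 1)), 3))
print("symptoms only:    ", round(mcfadden_r2(symptom_score.reshape(-1, 1)), 3))
print("antigen + symptoms:", round(mcfadden_r2(np.column_stack([antigen, symptom_score])), 3))
```
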
7.
Qual Manag Health Care ; 32(Suppl 1): S3-S10, 2023.
Article in English | MEDLINE | ID: covidwho-2191200

ABSTRACT

BACKGROUND AND OBJECTIVES: This article describes how multisystemic symptoms, both respiratory and nonrespiratory, can be used to differentiate coronavirus disease-2019 (COVID-19) from other diseases at the point of patient triage in the community. The article also shows how combinations of symptoms could be used to predict the probability of a patient having COVID-19. METHODS: We first used a scoping literature review to identify symptoms of COVID-19 reported during the first year of the global pandemic. We then surveyed individuals with reported symptoms and recent reverse transcription polymerase chain reaction (RT-PCR) test results to assess the accuracy of diagnosing COVID-19 from reported symptoms. The scoping literature review, which included 81 scientific articles published by February 2021, identified 7 respiratory, 9 neurological, 4 gastrointestinal, 4 inflammatory, and 5 general symptoms associated with COVID-19 diagnosis. The likelihood ratio associated with each symptom was estimated from the sensitivity and specificity of the symptom as reported in the literature. A total of 483 individuals were then surveyed to validate the accuracy of predicting COVID-19 diagnosis based on patient symptoms, using the likelihood ratios calculated from the literature review. Survey results were weighted to reflect the age, gender, and race of the US population. The accuracy of predicting COVID-19 diagnosis from patient-reported symptoms was assessed using the area under the receiver operating characteristic curve (AROC). RESULTS: In the community, cough, sore throat, runny nose, dyspnea, and hypoxia, by themselves, were not good predictors of COVID-19 diagnosis. A combination of cough and fever was also a poor predictor of COVID-19 diagnosis (AROC = 0.56). The accuracy of diagnosing COVID-19 based on symptoms was highest when individuals presented with symptoms from different body systems (AROC of 0.74-0.81); accuracy was lowest when individuals presented with only respiratory symptoms (AROC = 0.48). CONCLUSIONS: There are no simple rules that clinicians can use to diagnose COVID-19 in the community when diagnostic tests are unavailable or untimely. However, triage of patients to appropriate care and treatment can be improved by reviewing combinations of certain types of symptoms across body systems.


Subject(s)
COVID-19 , Humans , Cough/diagnosis , Cough/etiology , COVID-19/diagnosis , COVID-19 Testing , SARS-CoV-2 , Triage
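
The likelihood-ratio arithmetic underlying the study above can be sketched as follows: each symptom's positive likelihood ratio is sensitivity / (1 - specificity), and, assuming conditional independence, the ratios multiply onto the pre-test odds to give a post-test probability. The sensitivity and specificity values below are made-up placeholders, not the figures extracted in the review.

```python
# Minimal sketch: combining symptom likelihood ratios into a post-test probability.
def lr_positive(sensitivity: float, specificity: float) -> float:
    """Positive likelihood ratio of a symptom."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pretest_prob: float, lrs: list[float]) -> float:
    """Multiply likelihood ratios onto the pre-test odds, then convert back to probability."""
    odds = pretest_prob / (1.0 - pretest_prob)
    for lr in lrs:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical symptoms from different body systems: (sensitivity, specificity).
symptom_stats = {"anosmia": (0.40, 0.95), "fever": (0.70, 0.60), "diarrhea": (0.15, 0.90)}
lrs = [lr_positive(se, sp) for se, sp in symptom_stats.values()]
print(f"post-test probability: {post_test_probability(0.05, lrs):.3f}")
```
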
8.
PLOS Glob Public Health ; 2(7): e0000221, 2022.
Article in English | MEDLINE | ID: covidwho-2021475

ABSTRACT

This study uses two existing data sources to examine how patients' symptoms can be used to differentiate COVID-19 from other respiratory diseases. One dataset consisted of 839,288 laboratory-confirmed, symptomatic, COVID-19 positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2020, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza-positive cases and 812 cases with symptomatic influenza-like illness. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. The comparisons were done using 45 scenarios, with each scenario making different assumptions regarding the prevalence of COVID-19 (2%, 4%, and 6%), influenza (0.01%, 3%, 6%, 9%, 12%), and influenza-like illness (1%, 3.5%, and 7%). For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables (age, gender) and 10 symptoms (cough, fever, chills, diarrhea, nausea and vomiting, shortness of breath, runny nose, sore throat, myalgia, and headache). The 5-fold cross-validated area under the receiver operating characteristic curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on a variety of factors, including (1) the prevalence of pathogens that cause COVID-19, influenza, and influenza-like illness; (2) the age of the patient; and (3) the presence of other symptoms. The model that relied on a 5-way combination of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without relying on a computer-based decision aid. The study results encourage development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions.
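
Because this record describes the same study as entry 2, the sketch below illustrates a different piece of the method: enumerating the 45 prevalence scenarios and re-anchoring a case-control logistic model's intercept to an assumed population prevalence using the standard log-odds offset correction. The fitted intercept used here is hypothetical, and the abstract does not state that the authors applied exactly this correction.

```python
# Minimal sketch: enumerating prevalence scenarios and adjusting a case-control
# intercept to an assumed population prevalence. Numbers are illustrative.
import itertools
import math

covid_prev = [0.02, 0.04, 0.06]
flu_prev = [0.0001, 0.03, 0.06, 0.09, 0.12]
ili_prev = [0.01, 0.035, 0.07]
scenarios = list(itertools.product(covid_prev, flu_prev, ili_prev))
print(len(scenarios), "scenarios")  # 3 * 5 * 3 = 45

def adjusted_intercept(b0: float, n_cases: int, n_controls: int, prevalence: float) -> float:
    """Shift a case-control intercept from the sampling odds to assumed population odds."""
    sampling_odds = n_cases / n_controls
    population_odds = prevalence / (1.0 - prevalence)
    return b0 - math.log(sampling_odds) + math.log(population_odds)

# Example: hypothetical fitted intercept of -0.3 with 839,288 cases and 2,626 controls,
# re-anchored to a scenario assuming 4% COVID-19 prevalence.
print(round(adjusted_intercept(-0.3, 839288, 2626, 0.04), 3))
```
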

9.
Qual Manag Health Care ; 31(2): 85-91, 2022.
Article in English | MEDLINE | ID: covidwho-1709460

ABSTRACT

BACKGROUND: The importance of various patient-reported signs and symptoms to the diagnosis of coronavirus disease 2019 (COVID-19) changes during, and outside of, the flu season. None of the current published studies focusing on the diagnosis of COVID-19 have taken this seasonality into account. OBJECTIVE: To develop a predictive algorithm that estimates the probability of having COVID-19 based on symptoms and that incorporates data on the seasonality and prevalence of influenza and influenza-like illness. METHODS: Differential diagnosis of COVID-19 and influenza relies on demographic characteristics (age, race, and gender) and on respiratory (eg, fever, cough, and runny nose), gastrointestinal (eg, diarrhea, nausea, and loss of appetite), and neurological (eg, anosmia and headache) signs and symptoms. The analysis was based on symptoms reported by COVID-19 patients, 774 in China and 273 in the United States. The analysis also included 2885 influenza and 884 influenza-like illness cases in US patients. The accuracy of the predictions was calculated using the average area under the receiver operating characteristic (AROC) curve. RESULTS: The likelihood ratio for symptoms such as cough depended on the flu season, sometimes indicating COVID-19 and other times indicating the reverse. In 30-fold cross-validated data, the symptoms accurately predicted COVID-19 (AROC of 0.79), showing that symptoms can be used to screen patients in the community prior to testing. CONCLUSION: Community-based health care providers should follow different signs and symptoms for diagnosing COVID-19 during, and outside of, the influenza season.


Subject(s)
COVID-19 , Influenza, Human , COVID-19/diagnosis , COVID-19/epidemiology , Humans , Influenza, Human/diagnosis , Influenza, Human/epidemiology , Prevalence , Probability , SARS-CoV-2
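
A minimal sketch of the seasonality point made in the study above: the pre-test probability that a symptomatic patient has COVID-19 rather than influenza shifts with the flu season, so the same symptom updates the posterior differently in and out of season. The prior probabilities and the likelihood ratio below are illustrative assumptions, not the study's estimates.

```python
# Minimal sketch: season-dependent prior combined with a symptom likelihood ratio.
def covid_posterior(covid_prior: float, lr_symptom: float) -> float:
    """Bayes update: prior odds times the symptom's likelihood ratio, back to probability."""
    odds = covid_prior / (1.0 - covid_prior) * lr_symptom
    return odds / (1.0 + odds)

# Assumed priors for a symptomatic patient: COVID-19 accounts for a smaller share of
# febrile respiratory illness at the peak of flu season than outside it.
priors = {"peak flu season": 0.10, "outside flu season": 0.40}
lr_cough_vs_flu = 0.9  # cough is common to both diseases, so it barely moves the odds (assumed)

for season, prior in priors.items():
    print(f"{season}: P(COVID-19 | cough) = {covid_posterior(prior, lr_cough_vs_flu):.2f}")
```
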